Sparse Matrix-Vector Multiplication and Conjugate Gradient Algorithm on Hybrid Computing Platforms
Authors
Abstract
The solution to a nonsingular linear system Ax=b (where A is a matrix and x and b are vectors) lies in a Krylov space whose dimension is the degree of the minimal polynomial of A. If this minimal polynomial has low degree, a Krylov method has the potential for rapid convergence [1]. When the coefficient matrix A is large and sparse, solving the system by direct methods takes too long and requires too much storage, which motivates iterative Krylov methods such as the Conjugate Gradient algorithm.
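As an illustration of the iterative setting the abstract alludes to, the following minimal C sketch pairs a CSR (compressed sparse row) sparse matrix-vector product with an unpreconditioned Conjugate Gradient loop. The 4x4 symmetric positive definite matrix, the right-hand side, the iteration cap, and the tolerance are illustrative assumptions, not data from the paper.

```c
/* Minimal sketch: unpreconditioned Conjugate Gradient driven by a CSR
 * sparse matrix-vector product.  Matrix, right-hand side and tolerance
 * are assumed examples, not taken from the paper. */
#include <math.h>
#include <stdio.h>

#define N 4

/* 4x4 symmetric positive definite matrix in CSR form (assumed example). */
static const int    row_ptr[N + 1] = {0, 2, 5, 8, 10};
static const int    col_idx[10]    = {0, 1, 0, 1, 2, 1, 2, 3, 2, 3};
static const double val[10]        = {4, -1, -1, 4, -1, -1, 4, -1, -1, 4};

/* y = A*x for a matrix stored in CSR. */
static void spmv(const double *x, double *y) {
    for (int i = 0; i < N; ++i) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += val[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

static double dot(const double *a, const double *b) {
    double s = 0.0;
    for (int i = 0; i < N; ++i) s += a[i] * b[i];
    return s;
}

int main(void) {
    double b[N] = {1, 2, 3, 4};      /* right-hand side (assumed) */
    double x[N] = {0};               /* initial guess x0 = 0      */
    double r[N], p[N], Ap[N];

    spmv(x, Ap);                     /* r = b - A*x0 */
    for (int i = 0; i < N; ++i) { r[i] = b[i] - Ap[i]; p[i] = r[i]; }
    double rr = dot(r, r);

    for (int it = 0; it < 100 && sqrt(rr) > 1e-10; ++it) {
        spmv(p, Ap);
        double alpha = rr / dot(p, Ap);
        for (int i = 0; i < N; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;
        rr = rr_new;
        for (int i = 0; i < N; ++i) p[i] = r[i] + beta * p[i];
    }
    for (int i = 0; i < N; ++i) printf("x[%d] = %g\n", i, x[i]);
    return 0;
}
```

The point of the sketch is that the whole solver touches A only through spmv and keeps a handful of vectors, which is why CG-type methods avoid the time and storage cost of direct factorizations on large sparse systems.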
Similar resources
Algorithm for Sparse Approximate Inverse Preconditioners in the Conjugate Gradient Method
We propose a method for preconditioner construction and parallel implementations of the Preconditioned Conjugate Gradient algorithm on GPU platforms. The preconditioning matrix is an approximate inverse derived from an algorithm for the iterative improvement of a solution to linear equations. Using a sparse matrix-vector product, our preconditioner is well suited for massively parallel GPU arch...
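A hedged sketch of how an approximate-inverse preconditioner is applied inside PCG: because the preconditioner M (an approximation to the inverse of A) is itself a sparse matrix stored in CSR, the preconditioning step z = M*r is just one more sparse matrix-vector product per iteration, which is what makes this family of preconditioners GPU-friendly. The M below is merely the inverse of diag(A) (a Jacobi approximate inverse), not the iterative-improvement construction of the cited work, and the small test matrix is an assumption.

```c
/* Preconditioned CG where the preconditioner M ~ inv(A) is applied with
 * the same CSR SpMV kernel as A.  M here is just 1/diag(A); the data is
 * an assumed toy example. */
#include <math.h>
#include <stdio.h>

#define N 4

/* CSR storage: row pointers, column indices, values. */
typedef struct { const int *rp; const int *ci; const double *v; } csr;

static void spmv(csr A, const double *x, double *y) {
    for (int i = 0; i < N; ++i) {
        double s = 0.0;
        for (int k = A.rp[i]; k < A.rp[i + 1]; ++k) s += A.v[k] * x[A.ci[k]];
        y[i] = s;
    }
}

static double dot(const double *a, const double *b) {
    double s = 0.0;
    for (int i = 0; i < N; ++i) s += a[i] * b[i];
    return s;
}

int main(void) {
    /* Assumed SPD test matrix A and a Jacobi approximate inverse M. */
    const int    arp[N + 1] = {0, 2, 5, 8, 10};
    const int    aci[10]    = {0, 1, 0, 1, 2, 1, 2, 3, 2, 3};
    const double av[10]     = {4, -1, -1, 4, -1, -1, 4, -1, -1, 4};
    const int    mrp[N + 1] = {0, 1, 2, 3, 4};
    const int    mci[N]     = {0, 1, 2, 3};
    const double mv[N]      = {0.25, 0.25, 0.25, 0.25};
    csr A = {arp, aci, av}, M = {mrp, mci, mv};

    double b[N] = {1, 2, 3, 4}, x[N] = {0};
    double r[N], z[N], p[N], Ap[N];

    spmv(A, x, Ap);
    for (int i = 0; i < N; ++i) r[i] = b[i] - Ap[i];
    spmv(M, r, z);                       /* z = M*r: preconditioning is an SpMV */
    for (int i = 0; i < N; ++i) p[i] = z[i];
    double rz = dot(r, z);

    for (int it = 0; it < 100 && sqrt(dot(r, r)) > 1e-10; ++it) {
        spmv(A, p, Ap);
        double alpha = rz / dot(p, Ap);
        for (int i = 0; i < N; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
        spmv(M, r, z);                   /* apply the approximate inverse again */
        double rz_new = dot(r, z);
        double beta = rz_new / rz;
        rz = rz_new;
        for (int i = 0; i < N; ++i) p[i] = z[i] + beta * p[i];
    }
    for (int i = 0; i < N; ++i) printf("x[%d] = %g\n", i, x[i]);
    return 0;
}
```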
Data-parallel programming with Intel Array Building Blocks (ArBB)
Intel Array Building Blocks is a high-level data-parallel programming environment designed to produce scalable and portable results on existing and upcoming multi- and many-core platforms. We have chosen several mathematical kernels: a dense matrix-matrix multiplication, a sparse matrix-vector multiplication, a 1-D complex FFT, and a conjugate gradient solver as synthetic benchmarks and representa...
Fast Conjugate Gradients with Multiple GPUs
The limiting factor for the efficiency of sparse linear solvers is memory bandwidth. In this work, we describe a fast Conjugate Gradient solver for unstructured problems, which runs on multiple GPUs installed on a single mainboard. The solver achieves double precision accuracy with single precision GPUs, using a mixed precision iterative refinement algorithm. To achieve high computation speed, ...
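The mixed precision idea can be sketched independently of the GPU setup: compute the residual and accumulate the solution in double precision, and obtain each correction from an inner solver that runs entirely in single precision. In the sketch below the inner solver is a fixed number of Jacobi sweeps on the CPU, standing in for the single-precision GPU CG of the cited work; the matrix, sweep count, and tolerance are assumptions.

```c
/* Mixed precision iterative refinement: residual and solution update in
 * double, inner correction solve entirely in float.  The inner Jacobi
 * solver and the small matrix are assumed stand-ins. */
#include <math.h>
#include <stdio.h>

#define N 4

static const double A[N][N] = {
    {4, -1, 0, 0}, {-1, 4, -1, 0}, {0, -1, 4, -1}, {0, 0, -1, 4}
};

/* Inner solver: Jacobi sweeps in single precision, d ~ inv(A)*r. */
static void inner_solve_float(const double r[N], double d[N]) {
    float Af[N][N], rf[N], df[N] = {0}, tmp[N];
    for (int i = 0; i < N; ++i) {
        rf[i] = (float)r[i];
        for (int j = 0; j < N; ++j) Af[i][j] = (float)A[i][j];
    }
    for (int sweep = 0; sweep < 20; ++sweep) {
        for (int i = 0; i < N; ++i) {
            float s = rf[i];
            for (int j = 0; j < N; ++j) if (j != i) s -= Af[i][j] * df[j];
            tmp[i] = s / Af[i][i];
        }
        for (int i = 0; i < N; ++i) df[i] = tmp[i];
    }
    for (int i = 0; i < N; ++i) d[i] = (double)df[i];
}

int main(void) {
    double b[N] = {1, 2, 3, 4}, x[N] = {0}, r[N], d[N];

    for (int outer = 0; outer < 50; ++outer) {
        /* High precision residual r = b - A*x (double). */
        double norm = 0.0;
        for (int i = 0; i < N; ++i) {
            double s = b[i];
            for (int j = 0; j < N; ++j) s -= A[i][j] * x[j];
            r[i] = s;
            norm += s * s;
        }
        if (sqrt(norm) < 1e-12) break;

        /* Low precision correction, then update in double. */
        inner_solve_float(r, d);
        for (int i = 0; i < N; ++i) x[i] += d[i];
    }
    for (int i = 0; i < N; ++i) printf("x[%d] = %.15g\n", i, x[i]);
    return 0;
}
```

Because the residual is recomputed in double each outer iteration, the solution can reach double precision accuracy even though every correction is produced in single precision.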
Segmented Operations for Sparse Matrix Computation on Vector Multiprocessors
In this paper we present a new technique for sparse matrix multiplication on vector multiprocessors based on the efficient implementation of a segmented sum operation. We describe how the segmented sum can be implemented on vector multiprocessors such that it both fully vectorizes within each processor and parallelizes across processors. Because of our method’s insensitivity to relative row siz...
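The segmented-sum formulation of CSR SpMV can be sketched as three steps: an elementwise multiply over all nonzeros, derivation of segment-start flags from the row pointers, and a segmented sum that produces one entry of the result per row. The sum below is performed serially and does not handle empty rows; the vectorized segmented scan and the row-size-insensitive details are what the cited paper addresses. The test matrix and vector are assumptions.

```c
/* CSR SpMV written as an elementwise product followed by a segmented
 * sum over row segments.  Serial toy version on an assumed matrix;
 * empty rows are not handled. */
#include <stdio.h>

#define N 4
#define NNZ 10

static const int    row_ptr[N + 1] = {0, 2, 5, 8, 10};
static const int    col_idx[NNZ]   = {0, 1, 0, 1, 2, 1, 2, 3, 2, 3};
static const double val[NNZ]       = {4, -1, -1, 4, -1, -1, 4, -1, -1, 4};

int main(void) {
    const double x[N] = {1, 2, 3, 4};
    double prod[NNZ];          /* elementwise products, data-parallel step */
    int    flag[NNZ] = {0};    /* 1 marks the first nonzero of each row    */
    double y[N] = {0};

    /* Step 1: flat, fully data-parallel elementwise multiply. */
    for (int k = 0; k < NNZ; ++k)
        prod[k] = val[k] * x[col_idx[k]];

    /* Step 2: derive segment-start flags from the row pointers. */
    for (int i = 0; i < N; ++i)
        flag[row_ptr[i]] = 1;

    /* Step 3: segmented sum over prod[], one partial sum per segment.
     * (Serial here; a vector machine would use a segmented scan.) */
    int row = -1;
    for (int k = 0; k < NNZ; ++k) {
        if (flag[k]) ++row;    /* a set flag starts a new row segment */
        y[row] += prod[k];
    }

    for (int i = 0; i < N; ++i) printf("y[%d] = %g\n", i, y[i]);
    return 0;
}
```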
Preconditioned Conjugate Gradient
Contents: Chapter 1. Introduction; Chapter 2. Background; 2.1. Matrix Compu...
Journal title:
Volume/Issue:
Pages: -
Publication date: 2007